PQ Guide: HAProxy

February 12, 2026

HAProxy sits in front of a lot of infrastructure. If you’re running it as a TLS termination point—and you probably are—the cryptography it negotiates is the cryptography protecting your traffic. So what does it take to get post-quantum key exchange working?

Less than you’d think. And more than vendors will tell you.

HAProxy Doesn’t Do Crypto

When someone opened GitHub issue #3171 requesting PQC support in HAProxy, the maintainers closed it as “works as designed.” It’s an architectural answer: HAProxy delegates all TLS operations to OpenSSL. It doesn’t implement cipher suites or key exchange algorithms itself.

No HAProxy PQC patch to wait for. No feature flag to enable. If the OpenSSL library underneath supports ML-KEM, HAProxy can negotiate it. The question is whether your OpenSSL supports it and whether your HAProxy config asks for it.

Two separate problems. People conflate them constantly.

Step 1: Get the Right OpenSSL

OpenSSL 3.5+ (released April 2025) has native PQC support. The default group list now sends two key shares: X25519MLKEM768 and X25519. If you link HAProxy against OpenSSL 3.5, hybrid PQC key exchange works out of the box.

For OpenSSL 3.0–3.4, you need the OQS provider. It works but adds build complexity. If you’re doing this fresh, go straight to 3.5.

Check what you’re running:

haproxy -vv | grep OpenSSL

If it says anything below 3.5, that’s your first problem. (Note that haproxy -vv reports both the OpenSSL version HAProxy was built with and the one it’s running on; you want 3.5+ on both lines.)

Most distro-packaged HAProxy links against whatever OpenSSL the distro ships. Ubuntu 24.04 ships OpenSSL 3.0 and HAProxy 2.8—neither has what you need for native PQC. Debian 12 ships OpenSSL 3.0. RHEL 9 ships OpenSSL 3.0.

Your options: build HAProxy from source against OpenSSL 3.5 (and own that build forever—every security patch, every update), use a container image with the right library, or wait for distro upgrades. Waiting is the default enterprise strategy, which is why most HAProxy deployments will be negotiating classical key exchange well into 2027.
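If you go the build-from-source route, the shape of it looks roughly like this. A sketch only: the tarball versions, the /opt prefix, and the lib64 directory name are assumptions to adjust for your system, and you own every future security patch of both builds.

```shell
# Build OpenSSL 3.5 into its own prefix, then link HAProxy against it.
# Versions and paths are illustrative -- check current releases first.
tar xf openssl-3.5.0.tar.gz && cd openssl-3.5.0
./Configure --prefix=/opt/openssl-3.5
make -j"$(nproc)" && sudo make install_sw    # install_sw skips the docs
cd ..

tar xf haproxy-3.1.5.tar.gz && cd haproxy-3.1.5
make -j"$(nproc)" TARGET=linux-glibc USE_OPENSSL=1 \
    SSL_INC=/opt/openssl-3.5/include SSL_LIB=/opt/openssl-3.5/lib64 \
    ADDLIB="-Wl,-rpath,/opt/openssl-3.5/lib64"   # bake in the runtime library path
sudo make install

haproxy -vv | grep OpenSSL   # confirm 3.5 on both the built-with and running-on lines
```

The rpath matters: without it, the binary may silently load the distro’s older libssl at runtime even though it was compiled against 3.5.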

Step 2: Configure Both Sides

Having the right OpenSSL isn’t enough. You need to tell HAProxy which groups to offer. And you need to do it on both the frontend and backend.

For the frontend (client-facing):

global
    ssl-default-bind-curves X25519MLKEM768:X25519:secp256r1:secp384r1

Or per-bind:

frontend https
    bind *:443 ssl crt /etc/haproxy/certs/site.pem curves X25519MLKEM768:X25519:secp256r1:secp384r1

HAProxy selects the first group from this list that the client also supports. X25519MLKEM768 first means PQC when available, classical otherwise.

For the backend (origin server) side—and this is where people stop—HAProxy 2.9 added ssl-default-server-curves:

global
    ssl-default-server-curves X25519MLKEM768:X25519:secp256r1:secp384r1

If you’re running HAProxy as a TLS termination proxy that re-encrypts to backends, and you only configure the frontend, you’ve put PQC on the client-facing leg but left the leg to your origins classical. This is the most common misconfiguration we see in PQC migration assessments.
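Put together, a minimal terminate-and-re-encrypt config with PQC groups on both legs might look like this. The addresses, certificate paths, and backend names are placeholders; the directives are the ones discussed above.

```
global
    ssl-default-bind-curves X25519MLKEM768:X25519:secp256r1:secp384r1
    ssl-default-server-curves X25519MLKEM768:X25519:secp256r1:secp384r1

frontend https
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend app

backend app
    # "ssl" on the server line re-encrypts to the origin;
    # ssl-default-server-curves governs this leg of the connection
    server app1 10.0.0.10:443 ssl verify required ca-file /etc/haproxy/certs/origin-ca.pem
```

Both ssl-default-*-curves lines live in the same global section, which makes it easy to spot a config that has one without the other.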

Version check: ssl-default-server-curves requires HAProxy 2.9+. If you’re on 2.8 (Ubuntu 24.04’s default), that directive doesn’t exist. Mozilla’s ssl-config-generator actually had a bug where it generated ssl-default-server-curves for HAProxy 2.8 configs. If you copied a generated config and HAProxy rejected it, now you know why.

Step 3: Watch for the TCP Passthrough Trap

This is the failure mode everyone should be writing about.

If you use HAProxy in TCP mode for SNI-based routing—passing TLS through to backends without terminating it—PQC clients will start breaking your setup. GitHub issue #3148 documents this: with OpenSSL 3.5+ clients sending X25519MLKEM768 by default, the larger ClientHello (~1,700 bytes) can cause connection failures when HAProxy inspects it with req.ssl_sni.

The config that triggers it:

frontend ft_443
    mode tcp
    bind :443
    use_backend %[req.ssl_sni,lower,map_str(/etc/haproxy/maps.tls-sni-backends)]

The reporter saw roughly 10% of connections fail with TCP RST or FIN right after the ClientHello. The issue was closed as “works as designed”—same architectural answer. The req.ssl_sni sample fetch has to parse the ClientHello to extract the SNI, and the larger PQC key shares push ClientHello past single-packet boundaries.

This is the same class of bug documented at tldr.fail: software that assumes a ClientHello fits in one TCP read(). Classical ClientHellos are small enough that this assumption usually holds. PQC ClientHellos blow past it. The X25519MLKEM768 key share alone is 1,216 bytes, compared to 32 bytes for X25519.
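The bug class is easy to see in code. A sketch in Python (not HAProxy’s actual implementation): a TLS handshake record declares its payload length in a 5-byte header, and a correct reader must loop until that many bytes have arrived rather than assuming one read() returns the whole ClientHello.

```python
# tldr.fail in miniature: a TLS record's header declares the payload
# length, but a single read() may return only part of the record.
# Sketch only -- real parsers must also handle multi-record ClientHellos.

def record_length(header: bytes) -> int:
    """Total record size: 5-byte header plus the declared payload length."""
    assert header[0] == 0x16, "not a TLS handshake record"
    return 5 + int.from_bytes(header[3:5], "big")

def read_full_record(read) -> bytes:
    """Loop until the whole record has arrived -- the fix for this bug class."""
    buf = b""
    while len(buf) < 5:
        buf += read()
    total = record_length(buf[:5])
    while len(buf) < total:
        buf += read()
    return buf[:total]

# A ~1,700-byte PQC-sized ClientHello split across two TCP segments:
record = b"\x16\x03\x01" + (1700).to_bytes(2, "big") + b"\x00" * 1700
segments = iter([record[:1460], record[1460:]])  # split at a typical MSS boundary
assert read_full_record(lambda: next(segments)) == record
```

A classical ClientHello usually fits in the first segment, so code that skips the second loop appears to work until a PQC client shows up.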

If you’re running HAProxy in TCP passthrough mode, test this with PQC-enabled clients before OpenSSL 3.5 becomes the default in major distros. If you wait, your users will discover the problem for you.

Workaround: increase tune.bufsize to accommodate the larger ClientHello. The default of 16,384 bytes should in principle hold a ~1,700-byte PQC ClientHello, but if you’re seeing failures anyway, try 32,768 (tune.bufsize 32768 in the global section). Or move to TLS termination mode. The HAProxy team has been working on buffer handling for large ClientHellos; check whether your version includes the fix.

Step 4: Verify

This is where configuration ends and the probing begins.

Setting curves X25519MLKEM768 in your config doesn’t mean it’s being negotiated. OpenSSL might silently fall back. A middlebox might strip the key share. The backend might not support it. You won’t know unless you check.

With pqprobe:

pqprobe scan your-haproxy-endpoint.example.com:443

You’re looking for X25519MLKEM768 in the negotiated group. If you see X25519 or secp256r1 instead, something between your config and the wire is wrong.

With openssl:

openssl s_client -connect your-endpoint:443 -tls1_3 2>&1 | grep "Server Temp Key"
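A complementary check is to offer only the hybrid group and see whether the handshake still completes. If the server can’t do ML-KEM, this fails outright instead of silently falling back. Assumes an OpenSSL 3.5+ client; the hostname is a placeholder.

```shell
# Offer only X25519MLKEM768 -- handshake success means the server supports it
openssl s_client -connect your-endpoint:443 -tls1_3 -groups X25519MLKEM768 </dev/null
```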

For the backend side, scan your origin servers independently:

pqprobe scan your-origin-server:443

If they’re returning classical groups, configuring HAProxy’s server-side curves won’t change anything—the origin has to support PQC too.

Scan both sides. Compare. That’s the only way to know if you have end-to-end PQC or just a PQC facade on the frontend.

What About QUIC?

HAProxy’s QUIC support depends on QuicTLS—a fork of OpenSSL that hasn’t seen a new tag since the 3.3 fork in April 2024. QuicTLS does not have OpenSSL 3.5’s native PQC support. If you’re terminating QUIC on HAProxy, PQC key exchange isn’t available on that path.

Recent HAProxy commits reference compatibility work with OpenSSL 3.5’s new QUIC API, but the gap is real today. If you have PQC-critical paths, route them over TLS 1.3 on TCP for now. QUIC traffic is growing, and if your PQC migration plan only covers TLS on TCP, track this gap explicitly—it shouldn’t be a surprise when auditors ask.

Performance

For key exchange, the overhead is smaller than people expect. AWS measured hybrid X25519+ML-KEM against classical ECDH: roughly 1,600 additional bytes per handshake and 80–150 microseconds of additional compute. The X25519MLKEM768 key share is 1,216 bytes versus 32 bytes for X25519—bigger, but it’s a one-time handshake cost amortized across every request on that connection.
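The amortization argument in numbers. The byte and latency figures are the AWS measurements above; the requests-per-connection count is an assumption for illustration.

```python
# Per-request cost of hybrid key exchange amortized over a keep-alive
# connection. 1,600 bytes / 150 us from the AWS measurements cited above;
# 100 requests per connection is an assumed reuse rate.
extra_bytes, extra_us = 1600, 150
requests_per_connection = 100

print(extra_bytes / requests_per_connection, "bytes/request")  # 16.0
print(extra_us / requests_per_connection, "us/request")        # 1.5
```

At any plausible reuse rate, the per-request overhead disappears into the noise; it only looks expensive if every request pays for a fresh handshake.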

This is specifically about key exchange. Post-quantum signatures are a different story. ML-DSA-65 signatures are ~3,309 bytes versus ~72 bytes for ECDSA P-256. A certificate chain with ML-DSA signatures can hit 10–18KB and add hundreds of milliseconds from TCP fragmentation alone. But PQC certificate chains aren’t widely deployed yet—what you’re configuring in HAProxy today is hybrid key exchange with classical certificates. When PQC certs arrive, that’s a separate performance conversation.

For now, the bottleneck in real deployments is rarely the PQC key exchange math. It’s misconfigured TLS settings—disabled session resumption forcing full handshakes on every connection, for example—that amplify costs that would otherwise be negligible.

The Checklist

Prerequisites:

  • OpenSSL 3.5+ linked to HAProxy (haproxy -vv | grep OpenSSL)
  • HAProxy 2.9+ if you need backend PQC (ssl-default-server-curves)

Configure:

  • Frontend: ssl-default-bind-curves X25519MLKEM768:X25519:secp256r1:secp384r1
  • Backend: ssl-default-server-curves X25519MLKEM768:X25519:secp256r1:secp384r1

Test:

  • If using TCP passthrough with req.ssl_sni: test with PQC-enabled clients for connection failures
  • Scan frontend with pqprobe—confirm X25519MLKEM768 is negotiated
  • Scan backends independently—confirm origins support PQC too
  • Check QUIC endpoints separately—PQC not available there yet

The Bigger Picture

HAProxy demonstrates a pattern across the infrastructure stack: the software itself doesn’t need to “add PQC support.” It needs to link against a library that does, and then be configured to use it. The gap between “supported” and “deployed” is library versions, configuration directives, protocol-specific limitations, and operational edge cases that only surface under real traffic.

That gap is what pqprobe tracks. Not whether PQC is possible in your stack, but whether it’s actually happening—on every port, every protocol, and both sides of every proxy.


Scan your HAProxy endpoints with pqprobe to see where you actually stand.